You can use PCM locks to lock data blocks for reading or for updating. If a PCM lock is used as a read-lock, other instances can acquire read-locks on the same data blocks. It is only when updating that an exclusive lock must be acquired.

PCM locks are allocated to data files, which gives you some flexibility in how the locks are configured. A PCM lock covers one or more data blocks, depending on the number of PCM locks allocated to the data file and the size of the data file. Because each PCM lock carries some overhead, it is not beneficial to overconfigure the locks.
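
In Oracle Parallel Server, the assignment of PCM locks to data files is made with the GC_FILES_TO_LOCKS initialization parameter. The following entry is a sketch only; the file numbers and lock counts are illustrative, not recommendations for any particular system:

    GC_FILES_TO_LOCKS = "1=500:2-3=1000:4=200"

This entry assigns 500 PCM locks to data file 1, 1,000 locks shared across data files 2 and 3, and 200 locks to data file 4. The more blocks a file contains relative to its lock count, the more blocks each lock must cover.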

If you know your data-access patterns, you can configure your system based on these general rules:

  Partition work between servers. Try to balance the systems so that users accessing the same table reside on the same computer. This arrangement reduces lock contention between machines. By segmenting the work, you can reduce the amount of lock traffic. Remember that once a lock is acquired, it is released only when another system needs to lock that data.
  Put more PCM locks on tables with heavy update traffic. If you have many updates, you can benefit from lowering the blocks-per-lock ratio. Increasing the number of locks increases overhead, but with fewer blocks per lock you may cut down on the percentage of locks experiencing contention.
  Use PCTFREE and PCTUSED to put fewer rows in each block on high-contention tables (see the example following this list). Doing this while decreasing the number of blocks per lock reduces lock contention, at the cost of more locks and more space.
  Put fewer locks on read-mostly tables. If a table is mostly read, use fewer PCM locks for it. Read locks are not exclusive, so the reduction in locks cuts down on interconnect traffic.
  Partition indexes into separate tablespaces. Because indexes are mostly read, they need fewer PCM locks. By separating them, you can allocate fewer PCM locks to the index tablespaces and more to the data tablespaces.
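
As a sketch of the PCTFREE/PCTUSED rule above, the statement below creates a hypothetical table (the name, columns, storage values, and tablespace are illustrative only) with a high PCTFREE so that fewer rows share each block:

    CREATE TABLE stock_prices (
        symbol      VARCHAR2(8),
        price       NUMBER(10,2),
        quote_time  DATE )
      PCTFREE  60    -- leave 60% of each block free, so fewer rows per block
      PCTUSED  30
      TABLESPACE stock_data;

Combined with a lower blocks-per-lock ratio on that tablespace's data files, this reduces the chance that two instances need the same PCM lock at the same time.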

The dynamic performance tables V$BH, V$CACHE, and V$PING contain information about the frequency of PCM lock contention. By looking at the FREQUENCY column in these tables, you can get an idea of the number of times lock conversions took place because of contention between instances.
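
For example, a query along these lines (a sketch that assumes the FREQUENCY column described above, plus the FILE# and BLOCK# columns that identify each buffer) lists the blocks with the most contention-related conversions; the threshold of 10 is arbitrary:

    SELECT file#, block#, frequency
      FROM v$ping
     WHERE frequency > 10
     ORDER BY frequency DESC;

Blocks near the top of this list point to the data files, and therefore the tables, where additional PCM locks are most likely to help.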

The dynamic performance table V$LOCK_ACTIVITY gives information on all types of PCM lock conversions. From this information, you can see whether a particular instance is experiencing a dramatic change in lock activity. An increase in lock activity may indicate that you don't have a sufficient number of PCM locks on that instance. You can then use the V$BH, V$CACHE, and V$PING tables to identify the problem area.
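
As a sketch, assuming the conversion-description and counter columns generally documented for V$LOCK_ACTIVITY (FROM_VAL, TO_VAL, ACTION_VAL, and COUNTER), you might sample conversion activity like this:

    SELECT from_val, to_val, action_val, counter
      FROM v$lock_activity
     ORDER BY counter DESC;

Running the query periodically and comparing the COUNTER values shows whether conversion activity on an instance is climbing over time.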

The Parallel Server option can be effective if your application is partitionable. If all the users in your system must access the same data, a parallel server may not be for you. But if you can partition your workload into divisions based on table access or if you need a fault-tolerant configuration, the Parallel Server option may be for you.

If you use the Parallel Server option, you must take special care to configure the system properly. A well-designed configuration lets you take maximum advantage of the Parallel Server features.

Spin Counts

Multiprocessor environments may benefit from tuning the parameter SPIN_COUNT. In normal circumstances, if a latch is not available, the process sleeps for a while and then wakes up to try the latch again. Because a latch is a low-level lock, a process does not hold it very long.

If you are on a multiprocessor system, it is likely that the process holding the latch is currently running on another CPU and will release it in a short time. If you set SPIN_COUNT to a value greater than zero, a process that misses a latch spins, counting down from SPIN_COUNT to zero, instead of sleeping immediately. If the latch is still not available when the count expires, the process goes to sleep.
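
Setting the parameter is a matter of adding a line such as the following to the initialization file; the value shown is illustrative, not a recommendation:

    SPIN_COUNT = 2000

Start from the default for your port, change the value in moderate steps, and judge the effect with the latch statistics described later in this section.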

The time it takes to complete one spin depends on the speed of your computer: a spin on a faster machine finishes more quickly than one on a slower machine, but the process holding the latch is also completing its work at a faster rate. The spin itself executes only a few instructions and is therefore very fast.

Enabling spinning is a trade-off. Setting SPIN_COUNT may increase CPU usage and thus slow the entire system. On the other hand, because putting a process to sleep is a relatively expensive operation, spinning sometimes gets you the latch in fewer CPU cycles than the sleep-and-wake-up cycle would take, and thus improves performance.

If the process spins and then must go to sleep anyway, there is definitely a performance loss. By carefully monitoring the system, you may be able to find an optimal value for SPIN_COUNT.

Monitor the statistics MISSES and SLEEPS in the dynamic performance table V$LATCH. If the SLEEPS value goes down, you are on the right track. If you never see any sleeps, it is not even necessary to tune SPIN_COUNT.
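
A query such as the following reports the latches that are actually being missed and slept on (GETS, MISSES, and SLEEPS are standard V$LATCH columns):

    SELECT name, gets, misses, sleeps
      FROM v$latch
     WHERE sleeps > 0
     ORDER BY sleeps DESC;

Run it before and after changing SPIN_COUNT; if the same workload produces fewer sleeps, the new value is helping.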


NOTE:  The SPIN_COUNT initialization parameter is not available on all operating systems. Check your OS-specific documentation to see whether it is available on your system.

